
    Twenty-Two New Approximate Proof Labeling Schemes

    Introduced by Korman, Kutten, and Peleg (Distributed Computing 2005), a proof labeling scheme (PLS) is a system dedicated to verifying that a given configuration graph satisfies a certain property. It is composed of a centralized prover, whose role is to generate a proof for yes-instances in the form of an assignment of labels to the nodes, and a distributed verifier, whose role is to verify the validity of the proof by local means and accept it if and only if the property is satisfied. To overcome lower bounds on the label size of PLSs for certain graph properties, Censor-Hillel, Paz, and Perry (SIROCCO 2017) introduced the notion of an approximate proof labeling scheme (APLS) that allows the verifier to accept also some no-instances as long as they are not "too far" from satisfying the property. The goal of the current paper is to advance our understanding of the power and limitations of APLSs. To this end, we formulate the notion of APLSs in terms of distributed graph optimization problems (OptDGPs) and develop two generic methods for the design of APLSs. These methods are then applied to various classic OptDGPs, obtaining twenty-two new APLSs. An appealing characteristic of our APLSs is that they are all sequentially efficient in the sense that both the prover and the verifier are required to run in (sequential) polynomial time. On the negative side, we establish "combinatorial" lower bounds on the label size for some of the aforementioned OptDGPs that demonstrate the optimality of our corresponding APLSs. For other OptDGPs, we establish conditional lower bounds that exploit the sequential efficiency of the verifier alone (under the assumption that NP ≠ co-NP) or that of both the verifier and the prover (under the assumption that P ≠ NP, with and without the unique games conjecture).
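
    The paper's APLS constructions are beyond an abstract-sized sketch, but the basic prover/verifier mechanics can be illustrated with the textbook PLS for certifying that parent pointers form a tree rooted at a designated node: the prover labels every node with its hop distance to the root, and each node checks its label against its parent's. The Python sketch below is ours, not taken from the paper; the names prove and verify_at are hypothetical.

        # Minimal sketch of a classic proof labeling scheme: certifying that
        # parent pointers form a tree rooted at `root`. Illustrative only;
        # this is not one of the paper's APLS constructions.

        def prove(parent, root):
            """Centralized prover: label every node with its hop distance to
            the root by following parent pointers (well defined on any
            yes-instance)."""
            label = {root: 0}
            def dist(v):
                if v not in label:
                    label[v] = dist(parent[v]) + 1
                return label[v]
            for v in parent:
                dist(v)
            return label

        def verify_at(v, parent, root, label):
            """Distributed verifier: node v reads only its own label and its
            parent's label. The scheme accepts iff every node accepts; labels
            must strictly decrease toward the root, which rules out cycles."""
            if v == root:
                return label[v] == 0
            return label[v] == label[parent[v]] + 1

        # Yes-instance: the path 2 -> 1 -> 0 rooted at 0.
        parent = {1: 0, 2: 1}
        label = prove(parent, root=0)
        assert all(verify_at(v, parent, 0, label) for v in (0, 1, 2))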

    Towards Distributed Two-Stage Stochastic Optimization

    The weighted vertex cover problem is concerned with selecting a subset of the vertices that covers a target set of edges with the objective of minimizing the total cost of the selected vertices. We consider a variant of this classic combinatorial optimization problem where the target edge set is not fully known; rather, it is characterized by a probability distribution. Adhering to the model of two-stage stochastic optimization, the execution is divided into two stages so that in the first stage, the decision maker selects some of the vertices based on the probabilistic forecast of the target edge set. Then, in the second stage, the edges in the target set are revealed and in order to cover them, the decision maker can augment the vertex subset selected in the first stage with additional vertices. However, in the second stage, the vertex cost increases by some inflation factor, so the second stage selection becomes more expensive. The current paper studies the two-stage stochastic vertex cover problem in the realm of distributed graph algorithms, where the decision making process (in both stages) is distributed among the vertices of the graph. By combining the stochastic optimization toolbox with recent advances in distributed algorithms for weighted vertex cover, we develop an algorithm that runs in time O(log(Δ)/ε), sends O(m) messages in total, and guarantees to approximate the optimal solution within a (3 + ε)-ratio, where m is the number of edges in the graph, Δ is its maximum degree, and 0 < ε < 1 is a performance parameter.
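
    To make the two-stage structure concrete, here is a sequential toy sketch (ours, not the paper's distributed algorithm), assuming edges are realized independently with known probabilities and unit vertex costs: a naive first-stage rule we introduce for illustration buys a vertex now if deferring it is more expensive in expectation, and the second stage covers the surviving edges greedily by taking both endpoints of each uncovered edge (the classic 2-approximation for unit costs).

        import random

        # Toy sequential sketch of two-stage stochastic vertex cover.
        # Assumptions (ours): each edge e is realized independently with
        # probability p[e]; unit vertex costs; inflation factor lam.

        def stage_one(edges, p, lam):
            nodes = {u for e in edges for u in e}
            s1 = set()
            for v in nodes:
                # Pr[at least one incident edge is realized].
                miss = 1.0
                for e in edges:
                    if v in e:
                        miss *= 1.0 - p[e]
                pr = 1.0 - miss
                # Naive rule: deferring v costs about lam * pr in expectation
                # versus 1 now, so buy in stage one whenever lam * pr > 1.
                if lam * pr > 1.0:
                    s1.add(v)
            return s1

        def stage_two(realized, s1):
            # Greedy maximal matching on the uncovered realized edges: take
            # both endpoints of any still-uncovered edge.
            s2 = set()
            for u, v in realized:
                if {u, v}.isdisjoint(s1 | s2):
                    s2.update((u, v))
            return s2

        edges = [(0, 1), (1, 2), (0, 2)]
        p = {e: 0.5 for e in edges}
        lam = 3.0
        s1 = stage_one(edges, p, lam)
        realized = [e for e in edges if random.random() < p[e]]
        s2 = stage_two(realized, s1)
        print("total cost:", len(s1) + lam * len(s2))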

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge since the existing constructions of low diameter graph decomposition with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and simulating CONGEST algorithms on the tree into which the graph is embedded.
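
    The Miller et al. building block is simple to state: each node v draws a random shift δ_v from an exponential distribution with rate β, and every node u joins the cluster of the node v maximizing δ_v − dist(v, u); clusters then have diameter O(log(n)/β) w.h.p., and an edge of length w is cut with probability O(βw). The sketch below (ours) implements this with one exact multi-source Dijkstra; the paper's contribution is precisely how to survive with approximate distances instead.

        import heapq, math, random

        def mpx_decomposition(graph, beta, seed=0):
            """Exponential-shift clustering (Miller, Peng, and Xu, SPAA 2013).
            graph: dict mapping node -> list of (neighbor, weight) pairs."""
            rng = random.Random(seed)
            delta = {v: rng.expovariate(beta) for v in graph}
            top = max(delta.values())
            # Dijkstra from a virtual source attached to each v with weight
            # top - delta[v]: node u is then claimed by the center v that
            # minimizes (top - delta[v]) + dist(v, u), i.e. maximizes
            # delta[v] - dist(v, u).
            dist = {v: math.inf for v in graph}
            center = {}
            pq = [(top - delta[v], v, v) for v in graph]
            heapq.heapify(pq)
            while pq:
                d, u, c = heapq.heappop(pq)
                if d >= dist[u]:
                    continue          # stale queue entry
                dist[u], center[u] = d, c
                for w, wt in graph[u]:
                    if d + wt < dist[w]:
                        heapq.heappush(pq, (d + wt, w, c))
            return center  # center[u] names u's cluster

        # Toy example: a unit-weight path 0 - 1 - 2 - 3.
        g = {0: [(1, 1)], 1: [(0, 1), (2, 1)],
             2: [(1, 1), (3, 1)], 3: [(2, 1)]}
        print(mpx_decomposition(g, beta=0.5))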

    Online Disjoint Set Cover Without Prior Knowledge

    The disjoint set cover (DSC) problem is a fundamental combinatorial optimization problem concerned with partitioning the (hyper)edges of a hypergraph into (pairwise disjoint) clusters so that the number of clusters that cover all nodes is maximized. In its online version, the edges arrive one-by-one and should be assigned to clusters in an irrevocable fashion without knowing the future edges. This paper investigates the competitiveness of online DSC algorithms. Specifically, we develop the first (randomized) online DSC algorithm that guarantees a poly-logarithmic (O(log^2 n)) competitive ratio without prior knowledge of the hypergraph's minimum degree. On the negative side, we prove that the competitive ratio of any randomized online DSC algorithm must be at least Ω((log n)/(log log n)) (even if the online algorithm does know the minimum degree in advance), thus establishing the first lower bound on the competitive ratio of randomized online DSC algorithms.
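
    For intuition, if the minimum degree δ were known in advance, a simple randomized strategy already works: assign each arriving edge uniformly at random to one of k = δ/(3 ln n) clusters. A fixed node (of degree at least δ) is then missed by a fixed cluster with probability at most (1 − 1/k)^δ ≤ e^(−3 ln n) = n^(−3), so by a union bound every cluster covers every node w.h.p. The sketch below (ours, a folklore warm-up rather than the paper's algorithm) implements this known-δ baseline; removing the prior knowledge of δ is exactly what the paper achieves.

        import math, random

        def online_random_dsc(n, delta, edge_stream, seed=0):
            """Known-minimum-degree baseline: irrevocably assign each arriving
            hyperedge to a uniformly random one of k clusters."""
            rng = random.Random(seed)
            k = max(1, int(delta / (3 * math.log(n))))
            clusters = [[] for _ in range(k)]
            for edge in edge_stream:            # edges arrive one by one
                clusters[rng.randrange(k)].append(edge)
            # Count how many clusters ended up covering all n nodes.
            covers = sum(1 for c in clusters
                         if {v for e in c for v in e} >= set(range(n)))
            return k, covers

        # Toy stream: each of the n nodes appears in delta singleton edges.
        n, delta = 50, 200
        stream = [(v,) for v in range(n) for _ in range(delta)]
        random.Random(1).shuffle(stream)
        k, covers = online_random_dsc(n, delta, stream)
        print(f"{covers} of {k} clusters cover all {n} nodes")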

    Design of Self-Stabilizing Approximation Algorithms via a Primal-Dual Approach

    Self-stabilization is an important concept in the realm of fault-tolerant distributed computing. In this paper, we propose a new approach that relies on the properties of linear programming duality to obtain self-stabilizing approximation algorithms for distributed graph optimization problems. The power of this new approach is demonstrated by the following results:
    - A self-stabilizing 2(1+ε)-approximation algorithm for minimum weight vertex cover that converges in O(log Δ/(ε log log Δ)) synchronous rounds.
    - A self-stabilizing Δ-approximation algorithm for maximum weight independent set that converges in O(Δ + log* n) synchronous rounds.
    - A self-stabilizing ((2α+1)(1+ε))-approximation algorithm for minimum weight dominating set in graphs of arboricity α that converges in O((log Δ)/ε) synchronous rounds.
    In all of the above, Δ denotes the maximum degree. Our technique improves upon previous results in terms of time complexity while incurring only an additive O(log n) overhead to the message size. In addition, to the best of our knowledge, we provide the first self-stabilizing algorithms for the weighted versions of minimum vertex cover and maximum independent set.
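
    The duality argument underlying this approach can be seen in the classic sequential primal-dual 2-approximation for weighted vertex cover (Bar-Yehuda and Even): raise the dual variable of each uncovered edge until an endpoint's dual constraint becomes tight, and take tight vertices into the cover. The sketch below (ours) is neither distributed nor self-stabilizing; it only illustrates the LP-duality reasoning the paper's framework builds on.

        def primal_dual_vertex_cover(edges, cost):
            # residual[v] = c(v) minus the duals of edges incident to v.
            residual = dict(cost)
            cover = set()
            for u, v in edges:
                if u in cover or v in cover:
                    continue
                # Raise the dual of edge (u, v) as far as feasibility allows.
                y = min(residual[u], residual[v])
                residual[u] -= y
                residual[v] -= y
                # A vertex whose dual constraint becomes tight joins the cover.
                for w in (u, v):
                    if residual[w] == 0:
                        cover.add(w)
            # LP duality: cost(cover) <= 2 * (sum of duals) <= 2 * OPT.
            return cover

        edges = [(0, 1), (1, 2), (2, 3)]
        cost = {0: 1.0, 1: 3.0, 2: 1.0, 3: 2.0}
        print(primal_dual_vertex_cover(edges, cost))  # -> {0, 2}, cost 2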

    Locally Restricted Proof Labeling Schemes

    • …